
[webgpu] support Pad operator #23141

Open · wants to merge 3 commits into main

Conversation

@xhcao (Contributor) commented Dec 18, 2024

Description

Motivation and Context

@xhcao (Contributor, Author) commented Dec 18, 2024

@jchen10 @hujiajie PTAL, thanks

xhcao marked this pull request as ready for review December 18, 2024 11:29
guschmue added the ep:WebGPU ort-web webgpu provider label Dec 19, 2024
@guschmue (Contributor):

/azp run ONNX Runtime Web CI Pipeline,Windows GPU CI Pipeline,Linux Android Emulator QNN CI Pipeline

@guschmue (Contributor):

/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline

@guschmue (Contributor):

/azp run Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline,Big Models


Azure Pipelines successfully started running 2 pipeline(s).

@guschmue (Contributor):

/azp run Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline


Azure Pipelines successfully started running 4 pipeline(s).


Azure Pipelines successfully started running 3 pipeline(s).


Azure Pipelines successfully started running 9 pipeline(s).

@xhcao (Contributor, Author) commented Dec 20, 2024

@fs-eire @guschmue Please help trigger the bots again. The last version failed on macOS but compiled correctly on Windows. I have changed the code, but have not been able to verify that it works on macOS. The compile error was:

```
/Users/runner/work/1/s/onnxruntime/core/providers/webgpu/tensor/pad.h:16:54: error: member initializer 'Program' does not name a non-static data member or base class
    PadProgram(const Mode mode, bool dim_value_zero) : Program{"Pad"}, mode_{mode}, dim_value_zero_{dim_value_zero} {}
```
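
For what it's worth, a plausible reading of that Clang error (a guess, not verified against the actual pad.h): if PadProgram is a class template deriving from the CRTP base Program<PadProgram<T>>, the base becomes dependent, and Clang's two-phase name lookup will not resolve the bare name Program in the mem-initializer list, while MSVC is more permissive. A minimal self-contained sketch with stand-in types:

```cpp
#include <string>
#include <utility>

// Stand-ins for illustration only; the real types live in the WebGPU EP.
template <typename DerivedT>
struct Program {
  explicit Program(std::string name) : name_(std::move(name)) {}
  std::string name_;
};
enum class Mode { Constant, Reflect, Edge, Wrap };

template <typename T>
class PadProgram final : public Program<PadProgram<T>> {
 public:
  PadProgram(const Mode mode, bool dim_value_zero)
      // : Program{"Pad"}, ...            // rejected by Clang: `Program` is a
      //                                  // dependent base, so the bare name
      //                                  // is not found during lookup
      : Program<PadProgram<T>>{"Pad"},    // naming the base explicitly works
        mode_{mode},
        dim_value_zero_{dim_value_zero} {}

 private:
  Mode mode_;
  bool dim_value_zero_;
};
```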

@guschmue (Contributor):

/azp run ONNX Runtime Web CI Pipeline,Windows GPU CI Pipeline,Linux Android Emulator QNN CI Pipeline

@guschmue (Contributor):

/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline

@guschmue (Contributor):

/azp run Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline,Big Models


Azure Pipelines successfully started running 2 pipeline(s).

@guschmue (Contributor):

/azp run Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline


Azure Pipelines successfully started running 4 pipeline(s).


Azure Pipelines successfully started running 3 pipeline(s).


Azure Pipelines successfully started running 9 pipeline(s).

Review comment on this registration snippet:

```cpp
.InputMemoryType(OrtMemTypeCPUInput, 1) \
.InputMemoryType(OrtMemTypeCPUInput, 2) \
.InputMemoryType(OrtMemTypeCPUInput, 3) \
.TypeConstraint("T", DataTypeImpl::GetTensorType<T>()), \
```
Contributor:

It seems you needn't have bothered with all the specialized stuff if you had used WebGpuSupportedNumberTypes() like this.
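
For readers following along, a rough sketch of what a registration via WebGpuSupportedNumberTypes() could look like (the opset version and exact builder chain are illustrative, not the diff in the linked branch):

```cpp
// Sketch: one untemplated registration covering all WebGPU-supported number
// types, replacing the per-T specializations from the quoted macro.
ONNX_OPERATOR_KERNEL_EX(
    Pad,
    kOnnxDomain,
    19,  // illustrative opset version
    kWebGpuExecutionProvider,
    (*KernelDefBuilder::Create())
        .TypeConstraint("T", WebGpuSupportedNumberTypes())
        .InputMemoryType(OrtMemTypeCPUInput, 1)   // pads, read on CPU
        .InputMemoryType(OrtMemTypeCPUInput, 2)   // constant_value, read on CPU
        .InputMemoryType(OrtMemTypeCPUInput, 3),  // axes, read on CPU
    Pad);
```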

@xhcao (Contributor, Author):

Pad is a template class, so the template type has to be passed through when registering. I am not sure whether WebGpuSupportedNumberTypes() works correctly here; I referred to the CUDA EP.

@xhcao (Contributor, Author):

Hi @fs-eire, @jchen10 prefers registering the kernel with WebGpuSupportedNumberTypes(), inferring the type of padValue from the input element type when running the kernel, and adding the uniforms dynamically, as in main...jchen10:onnxruntime:tmp.
I use a template class only to match the other EPs, taking the CUDA EP as an example.
What are your comments here?

Contributor:

@guschmue @fs-eire
My proposal is just an alternative way to get the uniform type at runtime, so that we don't need to bother with the specialized template kernel class registrations. It's just a minor change. If it's not beneficial enough in your view, let's keep the current solution and unblock this PR. Feel free to comment; I am okay either way.

@fs-eire (Contributor) commented Jan 14, 2025:

Based on today's understanding, I would suggest using an untemplated class, which optimizes for binary size. Vote for using WebGpuSupportedNumberTypes().

The CUDA EP uses a template class because nvcc can use that information to simplify the implementation. For WebGPU, however, we are shader based, so the compiler does not really take advantage of the template type.
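
To illustrate the untemplated shape being suggested (stand-in types as in the earlier sketch; this is an assumption about the direction, not this PR's code):

```cpp
#include <string>
#include <utility>

// Stand-ins for illustration only; the real types live in the WebGPU EP.
template <typename DerivedT>
struct Program {
  explicit Program(std::string name) : name_(std::move(name)) {}
  std::string name_;
};
enum class Mode { Constant, Reflect, Edge, Wrap };

// Untemplated variant: one class regardless of T. The element type is
// observed at run time and only affects uniform packing / WGSL string
// generation, so the C++ compiler emits this class once instead of once
// per type -- the binary-size win described above. Note the base is no
// longer dependent, so `Program{"Pad"}` also compiles fine on Clang.
class PadProgram final : public Program<PadProgram> {
 public:
  PadProgram(Mode mode, bool dim_value_zero)
      : Program{"Pad"}, mode_{mode}, dim_value_zero_{dim_value_zero} {}

 private:
  Mode mode_;
  bool dim_value_zero_;
};
```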

@guschmue (Contributor):

/azp run Win_TRT_Minimal_CUDA_Test_CI


Azure Pipelines successfully started running 1 pipeline(s).


Review comment on this snippet:

```cpp
const PadsVector* p_pads = &pads_;
const PadsVector* p_slices = &slices_;
WebGpuT value = ToWebGpuType<T>::FromFloat(value_);
```
@fs-eire (Contributor) commented Jan 14, 2025:

I would recommend avoiding the f32 -> f16 conversion of the value here.

When value_ is used, it means the model is a very old one: only opset 10 and below takes "value" from the attributes, and the type of the "value" attribute is always float.

On opset >= 11, the value comes from the 3rd input (i.e. inputs[2]), and its type matches the input data (i.e. inputs[0]).

My suggestion is to always use a u32 uniform to carry the value:

  • for opset <= 10, the value of this uniform is always the bitwise representation of the float number
  • for opset > 10, the value of this uniform is always the bitwise representation of the corresponding type T (padding 2 bytes of 0 for f16)

Inside WGSL, use a type cast or bitcast to get the const value.

This makes an untemplated class easier to implement. It also makes it easier to support Android/iOS in the future, considering most mobile devices do not support f16 in uniforms yet.
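
A minimal sketch of the host-side packing this suggestion implies (helper names are hypothetical, made up for illustration):

```cpp
#include <cstdint>
#include <cstring>

// Pack the pad value into a single u32 uniform, per the suggestion above.

// opset <= 10: the "value" attribute is always float; store its raw f32 bits.
uint32_t PackPadValueF32(float value) {
  uint32_t bits;
  std::memcpy(&bits, &value, sizeof(bits));  // bitwise f32 -> u32
  return bits;
}

// opset >= 11 with T == f16: keep the 16-bit pattern in the low half and
// pad the high 16 bits with zeros.
uint32_t PackPadValueF16(uint16_t f16_bits) {
  return static_cast<uint32_t>(f16_bits);
}

// On the WGSL side the constant is recovered with a bitcast, e.g.:
//   let v_f32 = bitcast<f32>(uniforms.value);
//   let v_f16 = bitcast<vec2<f16>>(uniforms.value).x;  // low 16 bits
```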

Labels: ep:WebGPU ort-web webgpu provider
4 participants